
Conversation


@rochevin commented Oct 7, 2025

Hello again,

Thanks again for responding to my issue.

I noticed that the --supervised_loss_weight parameter is exposed to adjust the supervised loss weight. However, it does not seem to propagate into the NeuralAdmixture class; the code always uses the default value of 100.

I applied a small patch to fix this and confirmed that the parameter now behaves as expected on my dataset.

My commit is just a suggestion; please feel free to use or adapt it if you find it helpful.
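In essence, the change just forwards the parsed CLI value into the model instead of letting the constructor default take over. The snippet below is only a generic illustration of that pattern; the function and argument names (fit_model, the NeuralAdmixture constructor signature) are placeholders, not the actual neural-admixture internals.

import argparse

# Illustration only: fit_model and this constructor are placeholders,
# not the real neural-admixture code.
class NeuralAdmixture:
    def __init__(self, k, supervised_loss_weight=100):
        self.k = k
        self.supervised_loss_weight = supervised_loss_weight

def fit_model(args):
    # Before the fix, the flag was parsed but never forwarded, so the
    # constructor silently fell back to its default of 100.
    return NeuralAdmixture(
        k=args.k,
        supervised_loss_weight=args.supervised_loss_weight,  # forward the CLI value
    )

parser = argparse.ArgumentParser()
parser.add_argument("--k", type=int, required=True)
parser.add_argument("--supervised_loss_weight", type=float, default=100)
args = parser.parse_args(["--k", "11", "--supervised_loss_weight", "5000"])
print(fit_model(args).supervised_loss_weight)  # 5000.0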

Examples

In my dataset, when using the pip-installed version of neural-admixture (1.6.7):

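# sweep --supervised_loss_weight over several orders of magnitude, keeping all other settings fixed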
for e in 100 200 500 1000 2000 5000 10000 20000 50000 100000; do
    neural-admixture train --k 11 --data_path ../results/20250801/genotypes/20250801_popref.bed --threads 20 --save_dir . --num_gpus 1 --name neural_20250801_svp_train_500_${e} --pops_path ../results/20250801/genotypes/20250801_popref.neuralpop --supervised_loss_weight ${e} --epochs 500
done
[Figure: before_code_update_loss_weight]

Each facet in the second plot shows the Q matrix for the training dataset at a different value of --supervised_loss_weight, but as you can see nothing changes. The top plot shows the result from classic Admixture for comparison.

And with the patch applied:

[Figure: after_code_update_loss_weight]

Best regards,
Vincent Rocher

@joansaurina (Collaborator)

Hello @rochevin,

Thank you for your detailed report and for sharing the patch. You are absolutely right: the --supervised_loss_weight parameter was not being propagated, and your fix addresses the issue.

We really appreciate you taking the time to test it on your dataset and confirm that it now behaves as expected. We will include this fix in the next release.

Best regards,

Joan

@rochevin (Author)

That’s great news!
Thank you for including this patch in the next release.
Best regards,

Vincent

